
Fix for loading of Kohya's Musubi tuner's Flux.2 dev LoRA #13189

Open
christopher5106 wants to merge 1 commit into huggingface:main from scenario-labs:fix_lora_flux2

Conversation

@christopher5106
Contributor

This PR adds support for loading Kohya's Musubi tuner Flux.2 LoRAs.

Reproducer code:

FLUX.2 Dev

import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-dev_lora.safetensors",
)

FLUX.2 Klein

import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4b", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "scenario-labs/musubi-tuner-loras", weight_name="flux2-klein-4b_lora.safetensors",
)

Without the fix, loading these weights produced the following warning:
No LoRA keys associated to Flux2Transformer2DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any Flux2Transformer2DModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new

With this PR, on-the-fly conversion now also works for Flux.2 dev LoRAs.

In this implementation, I preserved the logic of _convert_kohya_flux_lora_to_diffusers(), with two differences: first, I infer the maximum number of blocks from the state dict keys; second, I fix the LoRA keys for the new modules Flux2FeedForward and Flux2ParallelSelfAttnProcessor.
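For reference, inferring the block count from the state dict keys can be sketched as below. This is a hypothetical helper, not the PR's actual code: the function name and the key patterns (`double_blocks_<i>_` / `single_blocks_<i>_`, following Kohya-style naming) are assumptions for illustration.

```python
import re

def infer_block_counts(state_dict_keys):
    """Infer the number of double/single transformer blocks from LoRA key names.

    Hypothetical sketch: assumes Kohya-style keys such as
    'lora_unet_double_blocks_7_img_attn_qkv.lora_down.weight'.
    The real conversion logic in the PR may differ.
    """
    num_double, num_single = 0, 0
    for key in state_dict_keys:
        m = re.search(r"double_blocks_(\d+)_", key)
        if m:
            num_double = max(num_double, int(m.group(1)) + 1)
        m = re.search(r"single_blocks_(\d+)_", key)
        if m:
            num_single = max(num_single, int(m.group(1)) + 1)
    return num_double, num_single
```

Inferring the counts this way lets the same conversion path handle differently sized checkpoints (e.g. FLUX.2 dev vs. klein-4b) without hard-coding a block count per model.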
